State Abstraction as Compression in Apprenticeship Learning
Authors
Abstract
Similar Resources
State Abstraction in MAXQ Hierarchical Reinforcement Learning
Many researchers have explored methods for hierarchical reinforcement learning (RL) with temporal abstractions, in which abstract actions are defined that can perform many primitive actions before terminating. However, little is known about learning with state abstractions, in which aspects of the state space are ignored. In previous work, we developed the MAXQ method for hierarchical RL. In th...
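The state-abstraction idea here can be made concrete with a small sketch (a hypothetical taxi-style hierarchy, not an example from the paper): inside a subtask, the value function may depend on only a few state variables, so full states that agree on those variables can share a single entry.

# Hypothetical illustration of state abstraction within subtasks of a
# hierarchy: each subtask keys its values only on the variables it
# actually depends on, ignoring the rest of the state.

RELEVANT = {
    "navigate": ("taxi_pos",),                  # passenger location ignored
    "deliver":  ("taxi_pos", "passenger_pos"),  # fuel level ignored
}

def abstract(state: dict, subtask: str) -> tuple:
    """Project a full state onto the subtask's relevant variables."""
    return tuple(state[k] for k in RELEVANT[subtask])

# Two full states that agree on taxi_pos collapse to one abstract state
# for the "navigate" subtask:
s1 = {"taxi_pos": (0, 1), "passenger_pos": (3, 3), "fuel": 7}
s2 = {"taxi_pos": (0, 1), "passenger_pos": (4, 0), "fuel": 2}
assert abstract(s1, "navigate") == abstract(s2, "navigate")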
Bootstrapping Apprenticeship Learning
• We consider the problem of imitation learning where the examples, given by an expert, cover only a small part of a large state space.
• Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the partial demonstration, based on the assumption that the expert is maximizing an unknown utility function.
• IRL consists in learning a reward function that explains the expert... (a worked illustration of the linear-reward model behind this follows the list)
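The linear-utility assumption in the bullets above is the standard one in this line of work. As a worked illustration (the notation is generic, not taken from this abstract): the reward is modeled as linear in state features, so a policy's value is linear in its discounted feature expectations, and matching the expert's feature expectations matches the expert's value.

% Linear-reward model standard in apprenticeship learning / IRL
% (illustrative notation, not from the cited abstract):
\begin{align*}
  R(s) &= w^{\top}\phi(s), \quad \lVert w \rVert_2 \le 1
    && \text{reward linear in state features}\\
  \mu(\pi) &= \mathbb{E}\Big[\textstyle\sum_{t\ge 0}\gamma^{t}\phi(s_t)\,\Big|\,\pi\Big]
    && \text{discounted feature expectations}\\
  V(\pi) &= w^{\top}\mu(\pi)
    && \text{so } \mu(\tilde\pi)\approx\mu(\pi_E) \text{ implies } V(\tilde\pi)\approx V(\pi_E)
\end{align*}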
Structured Apprenticeship Learning
We propose a graph-based algorithm for apprenticeship learning when the reward features are noisy. Previous apprenticeship learning techniques learn a reward function by using only local state features. This can be a limitation in practice, as some features are often misspecified or subject to measurement noise. Our graphical framework, inspired by work on Markov Random Fields, allows to ...
Semi-Supervised Apprenticeship Learning
In apprenticeship learning we aim to learn a good policy by observing the behavior of an expert or a set of experts. In particular, we consider the case where the expert acts so as to maximize an unknown reward function defined as a linear combination of a set of state features. In this paper, we consider the setting where we observe many sample trajectories (i.e., sequences of states) but only...
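Since this setting works from sample trajectories of states, the quantity actually estimated from data is the empirical discounted feature expectation. A minimal sketch, assuming a feature map `phi` and a list of state sequences (both hypothetical names, not from the paper):

import numpy as np

def feature_expectations(trajectories, phi, gamma=0.99):
    """Monte-Carlo estimate of mu = E[sum_t gamma^t phi(s_t)]
    from observed state sequences (no actions or rewards needed)."""
    total = None
    for states in trajectories:
        # Discounted sum of features along one trajectory.
        acc = sum(gamma ** t * np.asarray(phi(s)) for t, s in enumerate(states))
        total = acc if total is None else total + acc
    return total / len(trajectories)

# Hypothetical usage: mu_E = feature_expectations(expert_trajs, phi).
# With R(s) = w . phi(s), any policy whose mu matches mu_E achieves the
# expert's value regardless of the unknown weight vector w.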
Safety-Aware Apprenticeship Learning
Apprenticeship learning (AL) is a class of “learning from demonstrations” techniques where the reward function of a Markov Decision Process (MDP) is unknown to the learning agent and the agent has to derive a good policy by observing an expert’s demonstrations. In this paper, we study the problem of how to make AL algorithms inherently safe while still meeting their learning objective. We conside...
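One common way to make “inherently safe” precise (stated here as a generic formulation, not necessarily the one this paper adopts) is to maximize the imitation objective subject to a bound on the probability of ever reaching an unsafe state:

% Generic safety-constrained apprenticeship objective (illustrative):
\begin{align*}
  \tilde{\pi} \in \arg\max_{\pi}\; & w^{\top}\mu(\pi)
    && \text{imitation objective, as in the sketch above}\\
  \text{s.t.}\;\; & \Pr^{\pi}\big[\exists\, t:\ s_t \in S_{\mathrm{unsafe}}\big] \le \delta
    && \text{bounded probability of reaching unsafe states}
\end{align*}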
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2019
ISSN: 2374-3468, 2159-5399
DOI: 10.1609/aaai.v33i01.33013134